Gzip response bodies #1448
base: main
Conversation
```rust
/// response.extensions_mut().insert(NoCompression);
/// ```
#[derive(Debug, Clone, Copy)]
pub struct NoCompression;
```
This seems like a way for a dropshot handler to tell dropshot not to compress the response even though it might otherwise compress it. That's neat, but happy to get rid of it if we don't need it.
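For illustration, here's a std-only sketch of the type-keyed extensions pattern this relies on. The real code uses `http`'s `Extensions` type on the response; the tiny map and the `should_compress` helper below are hypothetical stand-ins, just to show how a marker type like `NoCompression` acts as an opt-out signal.

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

/// Marker type a handler can insert to opt out of compression
/// (mirrors the `NoCompression` struct in the diff above).
#[derive(Debug, Clone, Copy)]
pub struct NoCompression;

/// Minimal stand-in for `http::Extensions`: a map keyed by TypeId.
#[derive(Default)]
struct Extensions(HashMap<TypeId, Box<dyn Any>>);

impl Extensions {
    fn insert<T: Any>(&mut self, val: T) {
        self.0.insert(TypeId::of::<T>(), Box::new(val));
    }
    fn get<T: Any>(&self) -> Option<&T> {
        self.0.get(&TypeId::of::<T>()).and_then(|b| b.downcast_ref())
    }
}

/// Hypothetical decision point: compress only if the client asked for
/// gzip AND the handler didn't insert the NoCompression marker.
fn should_compress(ext: &Extensions, client_accepts_gzip: bool) -> bool {
    client_accepts_gzip && ext.get::<NoCompression>().is_none()
}

fn main() {
    let mut ext = Extensions::default();
    assert!(should_compress(&ext, true));
    ext.insert(NoCompression);
    assert!(!should_compress(&ext, true));
    println!("ok");
}
```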
```toml
[dependencies.tokio-util]
version = "0.7"
features = [ "io", "compat" ]
```
Used for converting between AsyncRead/AsyncWrite from async-compression and the streams used by Body from hyper.
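A synchronous analogue may make the conversion clearer: what `tokio_util::io::ReaderStream` does for an `AsyncRead` is, in spirit, turning a reader into a sequence of byte chunks that a body can carry. This sketch is illustrative only (std-only, blocking); the real code is async.

```rust
use std::io::Read;

/// Blocking analogue of ReaderStream: adapt any `Read` into an
/// iterator of byte chunks, the way a body stream carries frames.
struct ChunkIter<R: Read> {
    reader: R,
    buf_size: usize,
}

impl<R: Read> Iterator for ChunkIter<R> {
    type Item = std::io::Result<Vec<u8>>;
    fn next(&mut self) -> Option<Self::Item> {
        let mut buf = vec![0u8; self.buf_size];
        match self.reader.read(&mut buf) {
            Ok(0) => None, // EOF: end of the stream
            Ok(n) => {
                buf.truncate(n);
                Some(Ok(buf))
            }
            Err(e) => Some(Err(e)),
        }
    }
}

fn main() {
    let data: &[u8] = b"hello, gzip world";
    let chunks: Vec<Vec<u8>> = ChunkIter { reader: data, buf_size: 4 }
        .map(Result::unwrap)
        .collect();
    // Reassembling the chunks recovers the original bytes.
    assert_eq!(chunks.concat(), data);
    assert_eq!(chunks[0], b"hell");
    println!("ok");
}
```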
```rust
}

#[tokio::test]
async fn test_gzip_compression_with_accept_encoding() {
```
This is the primary happy path test for compression.
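Roughly, the happy path boils down to assertions like these. The helper below is hypothetical (the actual test drives a real server through dropshot's test utilities); it just shows the checks involved: status, `Content-Encoding: gzip`, and a body that is actually gzip data.

```rust
/// Hypothetical shape of the happy-path assertions; not the real test.
fn check_compressed_response(status: u16, headers: &[(&str, &str)], body_is_gzip: bool) {
    assert_eq!(status, 200);
    assert!(headers
        .iter()
        .any(|(k, v)| k.eq_ignore_ascii_case("content-encoding") && *v == "gzip"));
    // Gzip bodies start with the magic bytes 0x1f 0x8b.
    assert!(body_is_gzip);
}

fn main() {
    let headers = [("content-encoding", "gzip"), ("transfer-encoding", "chunked")];
    let body = [0x1fu8, 0x8b, 0x08];
    check_compressed_response(200, &headers, body.starts_with(&[0x1f, 0x8b]));
    println!("ok");
}
```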
I gave this a look-over and it looks good; let me know if you'd like me to look at it in more detail. One thing I'm unsure of is how the content-length is set given the streaming compression. It should reflect the compressed size rather than the original size; is that right? Thanks for doing this.
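My understanding (an assumption about the approach, not something confirmed from the diff): with streaming compression the compressed size isn't known when the headers go out, so Content-Length has to be dropped entirely, and hyper then falls back to chunked transfer encoding for HTTP/1.1. A sketch, with a plain map standing in for real header types:

```rust
use std::collections::HashMap;

/// Hypothetical header fixup when a body is gzipped as a stream.
fn apply_gzip_headers(headers: &mut HashMap<String, String>) {
    // The compressed size is unknown up front, so no Content-Length.
    headers.remove("content-length");
    headers.insert("content-encoding".into(), "gzip".into());
    // hyper emits `Transfer-Encoding: chunked` on its own for
    // HTTP/1.1 bodies without a known length.
}

fn main() {
    let mut h = HashMap::new();
    h.insert("content-length".to_string(), "1024".to_string());
    apply_gzip_headers(&mut h);
    assert!(h.get("content-length").is_none());
    assert_eq!(h.get("content-encoding").map(String::as_str), Some("gzip"));
    println!("ok");
}
```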
I was confused about what is supposed to happen if the client can't handle a streaming response, but it seems all HTTP/1.1 clients must be able to handle chunked transfer encoding:
https://datatracker.ietf.org/doc/html/rfc7230#section-3.3.1

The full answer from GPT-5 through the Codex CLI was pretty helpful. The gist: Dropshot only decides to gzip when the client advertises gzip support, and HTTP/1.1 clients that advertise it must also be able to handle chunked transfer encoding, so this doesn't break non-streaming clients. It also laid out a matrix of server view vs. client capability.
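For concreteness, the server-side half of that decision can be sketched as a small predicate. This is a hypothetical helper, not the PR's actual parser; real `Accept-Encoding` handling also has q-values and `*` to consider.

```rust
/// Hypothetical predicate: does the Accept-Encoding header advertise gzip?
/// (Ignores q-values; a `gzip;q=0` entry would really mean "not acceptable".)
fn client_accepts_gzip(accept_encoding: Option<&str>) -> bool {
    accept_encoding
        .map(|v| {
            v.split(',')
                .any(|e| e.trim().split(';').next().map(str::trim) == Some("gzip"))
        })
        .unwrap_or(false)
}

fn main() {
    assert!(client_accepts_gzip(Some("gzip, deflate, br")));
    assert!(client_accepts_gzip(Some("deflate, gzip;q=0.5")));
    assert!(!client_accepts_gzip(Some("identity")));
    assert!(!client_accepts_gzip(None)); // no header: don't compress
    println!("ok");
}
```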
Thanks for the clarification. I see in the tests how you're validating the content-length header, etc. Should compression be configurable in the server settings? i.e. would it make sense to have compression=on be the default but allow one to make a server that never compresses responses?
Honestly not sure. Kind of hard to imagine why you would want to not compress if a client asked for it. But on the other hand it seems heavy-handed to just do it.
I added the config option (default true) and I used some helpers to shorten the tests a bit, which to me makes them a little more scannable, though you might prefer things inlined. I think this is ready for a real review.
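The knob described here might look something like this. The struct and field names are made up for the sketch; the actual diff has the real config field.

```rust
/// Hypothetical server config knob: compression defaults to on, but a
/// server can be built with it disabled.
#[derive(Debug, Clone)]
struct ServerConfig {
    compress_responses: bool,
}

impl Default for ServerConfig {
    fn default() -> Self {
        Self { compress_responses: true }
    }
}

/// Compress only if the server allows it AND the client asked for it.
fn should_compress(cfg: &ServerConfig, client_accepts_gzip: bool) -> bool {
    cfg.compress_responses && client_accepts_gzip
}

fn main() {
    assert!(should_compress(&ServerConfig::default(), true));
    let off = ServerConfig { compress_responses: false };
    assert!(!should_compress(&off, true));
    println!("ok");
}
```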
Closes #221
Needed a break so I kinda went to town here -- many rounds of review and iteration with both Claude Code and Codex. It's long as hell but it's almost all tests. Leaving as a draft for now because I want to go through it in more detail and write up a better description, but it seems pretty legit.